Using Simulation Tools for Embedded Software Development

Author

  • Jakob Engblom

Abstract

Abstraction vs. Detail

A key insight in building simulations is that you must always make a trade-off between simulator detail and the scope of the simulated system. Looking at some extreme cases, you cannot use the same level of abstraction when simulating the evolution of the universe on a grand scale as when simulating protein folding. You can always trade execution time for increased detail or scope, but assuming you want a result in a reasonable time, compromises are necessary.

A corollary to the abstraction rule is that simulation is a workload that can always use maximum computer performance (unless it is limited by the speed of interaction with the world or users). A faster computer or a less detailed model lets you scale up the size of the simulated system or reduce simulation run times. In general, if the processor in your computer is not loaded to 100%, you are not making optimal use of simulation. The high demand for computer power used to be a limiting factor for the use of simulation, requiring large, expensive, and rare supercomputers. Today, however, even the cheapest PC has sufficient computation power to perform relevant simulations in reasonable time. Thus, the availability of computer equipment is no longer a problem, and simulation should be considered as a tool for every engineer in a development project.

Simulating the Environment

Simulation of the physical environment is often done for its own sake, without regard for the eventual use of the simulation model by embedded software developers. It is standard practice in mechanical and electrical engineering to design with computer-aided tools and simulation. For example, control engineers developing control algorithms for physical systems such as engines or processing plants often build models of the controlled system in tools such as MATLAB/Simulink and LabVIEW. These models are then combined with a model of the controller under development, and control properties like stability and performance are evaluated. From a software perspective, this is simulating the specification of the embedded software along with the controlled environment.

For a space probe, the environment simulation could comprise a model of the planets, the sun, and the probe itself. This model can be used to evaluate proposed trajectories, since it is possible to work through missions years in length in a very short time. In conjunction with embedded computer simulations, such a simulator would provide data on the attitude and distance to the sun, the amount of power being generated from solar panels, and the positions of stars seen by the navigation sensors.

When the mechanical component of an embedded system is potentially dangerous or impractical to work with, you absolutely want to simulate the effects of the software before committing to physical hardware. For example, control software for heavy machinery or military vehicles is best tested in simulation. Also, the number of physical prototypes available is fairly limited in such circumstances, and not something every developer will have at their desk. Such models can be created using modeling tools, or written in C or C++ (which is quite popular in practice). In many cases, environment simulations can be simple data sequences captured from a real sensor or simply guessed by a developer.

It should be noted that a simulated environment can be used for two different purposes. One is to provide "typical" data to the computer system simulation, trying to mimic the behavior of the final physical system under normal operating conditions. The other is to provide "extreme" data, corresponding to boundary cases in the system behavior, and "faulty" data corresponding to broken sensors or similar cases outside normal operating conditions. The ability to inject extreme and faulty cases is a key benefit of simulation.
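As a rough illustration of how simple such a hand-written environment model can be, the C++ sketch below replays a captured (or invented) sensor trace and can be switched into "extreme" or "faulty" modes. It is a minimal sketch of the idea only; the class and its interface are hypothetical and not taken from any particular tool.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Minimal sketch of a hand-written environment model: it replays a captured
// (or guessed) temperature trace and can be switched into "extreme" or
// "faulty" modes to exercise boundary and error handling in the software.
// All names are hypothetical.
class TemperatureSensorModel {
public:
    enum class Mode { Typical, Extreme, Faulty };

    explicit TemperatureSensorModel(std::vector<double> trace)
        : trace_(std::move(trace)) {}

    void set_mode(Mode m) { mode_ = m; }

    // Called by the computer-system simulation each time the simulated
    // software samples the sensor.
    double sample() {
        switch (mode_) {
        case Mode::Faulty:
            return -273.15;          // broken sensor: impossible reading
        case Mode::Extreme:
            return 150.0;            // boundary case: above operating range
        case Mode::Typical:
        default: {
            double value = trace_.empty() ? 25.0 : trace_[index_];
            if (!trace_.empty())
                index_ = (index_ + 1) % trace_.size();
            return value;
        }
        }
    }

private:
    std::vector<double> trace_;
    std::size_t index_ = 0;
    Mode mode_ = Mode::Typical;
};
```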
Simulating the Human User Interface

The human interface portion of an embedded device is often also simulated during its development. For testing user interface ideas, rapid prototyping and simulation are very worthwhile and can be done in many different ways. One creative example is how the creator of the original Palm Pilot used a wooden block to simulate the effect of carrying the device. Instead of building complete implementations of the interface of a TV, mobile phone, or plant control computer, mockups are built in specialized user interface (UI) tools, in the Visual Studio GUI builder on a PC, or even in PowerPoint or Flash. Sometimes such simulations have complex behaviors implemented in various scripts or even simple prototype software stacks. Only when the UI design is stable do you commit to implementing it in real code for your real device, since this typically implies a greater programming effort.

In later phases of development, when the hardware user interface and most of the software user interface are done, a computer simulation of a device needs to provide input and output facilities to make it possible to test software for the device without hardware. This kind of simulation runs the gamut from simple text consoles showing the output from a serial port to graphical simulations of user interface panels where the user can click on switches, turn knobs, and watch feedback on graphical dials and screens. A typical example is Nokia's Series 60 development kit, which provides a virtual mobile phone with a keypad and small display. Another example is how virtual PC tools like VMware and Parallels map the display, keyboard, and mouse of a PC to a target system.

In consumer electronics, PC peripherals are often used to provide live test data approximating that of a real system. For example, a webcam is a good test data generator for a simulated mobile phone containing a camera. Even if the optics and sensors are different, it still provides something better than static predetermined images. The same goes for sound capture and playback – you want to hear the sound the machine is making, not just watch the waveform on a display.

Simulating the Network

Most embedded computers today are connected to one or more networks. These networks can be internal to a system; for example, in a rack-based system, VME, PCI, PCI Express, RapidIO, Ethernet, I2C, serial lines, and ATM can be used to connect the boards. In cars, CAN, LIN, FlexRay, and MOST buses connect body electronics, telematics, and control systems. Aircraft control systems communicate over special bus systems like MIL-STD-1553, ARINC 429, and AFDX. Between the external interfaces of systems, Ethernet running internet standards like UDP and TCP is common. Mobile phones connect to headsets and PCs over Bluetooth, USB, and IR, and to cellular networks using UMTS, CDMA2000, GSM, and other standards. Telephone systems have traffic flowing over many different protocols and physical standards like SS7, SONET, SDH, and ATM. Smartcards connect to card readers using contact or contact-less interfaces.
Sensor nodes communicate over standard wireless networks or lower-power, lower-speed interfaces like ZigBee. Thus, existing in an internal or external network is a reality for most embedded systems.

Due to the large scale of a typical network, the network part is almost universally simulated to some extent. You simply cannot test a phone switch or router inside its real deployment network, so you have to provide some kind of simulation of the external world. You do not want to test mobile phone viruses in the live network, for very practical reasons. Often, many other nodes on the network are being developed at the same time. Or you might just want to combine point simulations of several networked systems into a single simulated network.

Network simulation can be applied at many levels of the networking stack. The most common levels at which network simulation is performed are listed below, together with typical examples of each; the two levels closest to the hardware/software boundary – bit stream and packet transmission – are the ones most useful for embedded software work on a concrete target model.

  • Physical signalling: analog signals, bit errors, radio modeling.
  • Bit stream: clocked zeros and ones, CAN with contention, Ethernet with a CSMA model.
  • Packet transmission: Ethernet packets with MAC addresses, CAN packets, serial characters, VME data reads/writes.
  • Network protocol: TCP/IP and similar protocol stacks.
  • Application protocol: FTP, DHCP, SS7, CANopen.
  • High-level application actions: load software, configure node, restart.

The most detailed modeling level is the physical signal level. Here, the analog properties of the transmission medium and how signals pass through it are modeled. This makes it possible to simulate radio propagation, echoes, and signal degradation, or the electronic interference caused by signals on a CAN bus. It is quite rarely used in the setting of developing embedded systems software, since it is complex and provides more detail than strictly needed.

Bit stream simulation looks at the ones and zeroes transmitted on a bus or other medium. It is possible to detect events like transmission collisions on Ethernet and the resulting back-off, priorities being clocked onto a CAN bus, and signal garbling due to simultaneous transmissions in radio networks. An open example of such a simulator is the VMNet simulator for sensor networks. Considering the abstraction levels for computer system simulation discussed below, this is at an abstraction level similar to cycle-accurate simulation. Another example is the simulation of the precise clock-by-clock communication between units inside a system-on-a-chip.

Packet transmission passes entire packets around, where the definition of a packet depends on the network type. In Ethernet, packets can be up to 65 kB large, while serial lines usually transmit single bytes in each "packet". It is the network simulation equivalent of transaction-level modeling, as discussed below for computer boards. The network simulation has no knowledge of the meaning of the packets; it just passes opaque blobs of bits around. The software on the simulated system interacts with some kind of virtual network interface, programming it just like a real network device. This level is quite scalable in terms of simulation size, and is also an appropriate level at which to connect real and simulated networks. Common PC virtualization software like VMware operates at this level, as do embedded-systems virtualization tools from Virtutech, Synopsys, and VaST.
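To make the packet-transmission level concrete, the sketch below shows the general shape of such a link model in C++: endpoints register a receive callback, and the link forwards opaque byte buffers between them without ever interpreting their contents. All names and interfaces are invented for illustration and are not taken from any specific product.

```cpp
#include <cstdint>
#include <functional>
#include <map>
#include <utility>
#include <vector>

// Sketch of a packet-level network link: the simulation just moves opaque
// blobs of bytes between endpoints. Endpoint ids stand in for MAC addresses
// or CAN node ids; all names are hypothetical.
using Packet = std::vector<uint8_t>;
using Receiver = std::function<void(const Packet&)>;

class PacketLink {
public:
    // A simulated network interface registers itself with a callback.
    void attach(uint64_t endpoint_id, Receiver rx) {
        endpoints_[endpoint_id] = std::move(rx);
    }

    // Deliver a packet to one endpoint, or broadcast to all other endpoints
    // when the destination is unknown.
    void send(uint64_t from, uint64_t to, const Packet& p) {
        if (auto it = endpoints_.find(to); it != endpoints_.end()) {
            it->second(p);
        } else {
            for (auto& [id, rx] : endpoints_)
                if (id != from)
                    rx(p);
        }
    }

private:
    std::map<uint64_t, Receiver> endpoints_;
};
```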
Ignoring the actual structure of packets on the network, networks are often simulated at the level of network protocols like TCP/IP. The simulated nodes use some socket-style API to send traffic into a simulated network rather than a real network. Such a simulation becomes independent of the actual medium used, and can scale to very large networks. The network research tool NS2 operates at this level, for example. It is also a natural network model when using API-level simulation of the software, as discussed below.

Application protocol simulation simulates the behavior of network services and other nodes. Such tools simulate both the network protocols used and the application protocols built on top of them. They embody significant knowledge of the function of real-world network nodes or network subsystems, and offer the ability to test individual network nodes in an intelligent interactive environment, a concept often known as rest-of-network simulation. Vector Informatik's CAN tools are a typical example of such tools.

Finally, some high-level simulations of networked systems work at the level of application actions. In this context, we do not care about how network traffic is delivered, just about the activities it results in. This is a common mode when designing systems at the highest level, for example in UML models.

The level of abstraction to choose depends on your requirements, and it is often the case that several types of simulators are combined in a single simulation setup. A complex setup that forms a superset of most real-world deployments could combine:

  • Simulated nodes running the real software for the embedded system.
  • A rest-of-network simulator providing the illusion of many more nodes on the network.
  • A simple traffic generator that just injects packets according to some kind of randomized model (a minimal sketch follows below).
  • An instrumentation module that peeks at traffic without being visible on the network, showing the advantage of simulation for inspection.
  • A connection to the real-world network, on which some real systems are found, communicating with the simulated systems.
  • Real-world network test machines – the specialized equipment used today to test physical network nodes – which, thanks to a real-network bridge, can also be used with simulated systems.
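As an example of the simplest component in such a setup, the traffic generator mentioned above can be little more than a loop around a random number generator. The sketch below is hypothetical; the injection callback stands in for whatever interface the simulated network actually exposes.

```cpp
#include <cstdint>
#include <functional>
#include <random>
#include <utility>
#include <vector>

// Sketch of a simple traffic generator: it fabricates packets of random
// length and content and hands them to an injection callback, which in a
// real setup would feed the simulated network. Names are hypothetical.
class TrafficGenerator {
public:
    using Inject = std::function<void(const std::vector<uint8_t>&)>;

    explicit TrafficGenerator(Inject inject, unsigned seed = 1)
        : inject_(std::move(inject)), rng_(seed) {}

    // Create and inject 'count' random packets between 64 and 1500 bytes.
    void run(int count) {
        std::uniform_int_distribution<int> length(64, 1500);
        std::uniform_int_distribution<int> byte(0, 255);
        for (int i = 0; i < count; ++i) {
            std::vector<uint8_t> packet(length(rng_));
            for (auto& b : packet)
                b = static_cast<uint8_t>(byte(rng_));
            inject_(packet);
        }
    }

private:
    Inject inject_;
    std::mt19937 rng_;
};
```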
Simulating the Computer System and its Software

Now we get to the core of the system: the board containing one or more processors, and the software running on that board. Since the computer board hardware and its software are so closely related, we consider their simulation together. The central goal is to use a PC or workstation to develop and test software for the target embedded system, without resorting to physical hardware.

From a software perspective, the most common levels of simulation cut the target software stack – user programs at the top; middleware, databases, and Java virtual machines; the operating system; and drivers, boot firmware, and the hardware abstraction layer at the bottom – at different points:

  • "Java is Java": simulate using some common middleware/API set.
  • Classic OS API simulation: compile to the PC host, using a special implementation of the OS API.
  • Low-level API-level simulation: special device drivers and HAL for PC simulation, compiling to the host including the kernel.
  • Full-system simulation: simulate the hardware/software interface, running the unmodified software stack from the target compilation.

It sometimes makes sense to program and test embedded software against an API also found on the workstation. Java programs and programs using some specific database API or a standard middleware like CORBA can sometimes be tested in this manner. The assumption is that the behavior of these high-level APIs will be sufficiently similar on the host and target systems. This is not necessarily true; for example, the behavior of a Java virtual machine on a mobile phone is not likely to be the same as on a desktop, due to memory and processing power constraints and different input and output devices.

It is also possible to simulate some aspects of user programs with no model of the target system at all. For example, control models as discussed above are interesting to study regardless of the precise details of the target system software stack. Using UML and state charts to model pieces of software makes it possible to simulate the software on its own, without considering much of the target at all. Minimal assumptions are made about the properties of the underlying real-time operating system and target in these models.

A common and popular level of simulation is API-level simulation of a target operating system. This is also known as host-compiled simulation, since the embedded code is compiled to run on the host and not on the target system. This type of simulation is popular since it is conceptually simple and easy to understand, and is something that an embedded developer can basically create on his or her own in incremental steps. It also allows developers to use popular programming and debug tools available in a desktop environment, like Visual Studio, Purify, and Valgrind.

It also has several drawbacks. First, it often ends up being quite expensive to maintain the API-level simulation and the separate compilation setup it requires. Second, the behavior of the simulated system will differ in subtle but important details from the real thing. Details like the precise scheduling of processes, the size of memory, the behavior of memory protection, the availability of global variables, and target compiler peculiarities can make code that runs just fine in simulation break on the real target. Third, it is impossible to make use of code only available as a target binary. Fourth, complex actions involving the hardware and low-level software, such as rebooting the system, are very hard to represent.

Most embedded operating-system vendors offer some form of host-compiled simulation. For example, Wind River's VxSim and Enea's OSE Soft Kernel fit in this category. There are also many in-house implementations of API-level simulations, both for in-house and externally sourced operating systems.
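To illustrate what such an API-level simulation layer amounts to, the sketch below shows one invented RTOS-style call backed either by the real kernel on the target or by host primitives in a simulation build. The API, the kernel call, and the HOST_SIMULATION switch are assumptions for illustration only, not any vendor's actual interface.

```cpp
// Schematic sketch of API-level (host-compiled) simulation: the application
// is written against an RTOS-like API, and the same call is backed either by
// the real kernel on target or by host primitives when built for the PC.
#include <chrono>
#include <thread>

#if defined(HOST_SIMULATION)
// Host implementation: map the API onto ordinary host OS services.
inline void os_task_delay_ms(int ms) {
    std::this_thread::sleep_for(std::chrono::milliseconds(ms));
}
#else
// Target implementation: call into the real embedded kernel (not shown).
extern "C" void kernel_delay_ticks(unsigned ticks);
inline void os_task_delay_ms(int ms) {
    kernel_delay_ticks(static_cast<unsigned>(ms));   // assumes a 1 ms tick
}
#endif

// The application code is identical in both builds -- which is exactly why
// subtle differences in scheduling, timing, and memory behavior between the
// host and the target can go unnoticed until the code runs on real hardware.
void blink_task() {
    for (;;) {
        // toggle_led();  // hardware access would go through another API
        os_task_delay_ms(500);
    }
}
```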
The experience with such tools ranges from the very successful, where minimal debugging needs to be done after the switch-over to the real target, to utter failures, where some approximation in the API-level simulation did not work and the code had to be extensively debugged on the physical target.

To resolve some of the issues with API-level simulation, the hardware-independent part of the embedded operating system is sometimes used together with a simulation of the hardware-dependent parts. This is basically paravirtualization, where the device drivers and hardware-abstraction layer of an operating system are replaced with simplified code interacting with the host operating system rather than actual hardware. A paravirtual solution provides better insight into the behavior of the operating system kernel and services, but is still compiled to run on the host PC. It also requires access to the operating-system source code, and a programming effort corresponding to creating a new board-support package (BSP).

Standalone instruction-set simulators (ISS) are common and established in embedded compiler and debug toolsets. Such simulators usually simulate only the user-level instruction set of a processor, and let simple programs that do not really do I/O run on the host. Modeling of peripheral devices is typically limited to providing sequences of bytes to be read from certain memory locations. Operating systems cannot be run on such simulators, since more complex and necessary devices like timers and serial ports are missing. The simulators are also typically quite slow, since there is little value gained from making them faster. IDEs from vendors like IAR and Green Hills are typical examples of development tools including a basic ISS.

Full-System Simulation

Finally, the problem can be attacked at the hardware/software interface level. Here, a virtual target system is created which runs the same binary stack (from boot firmware to device drivers and operating system) as the physical target system. This is achieved by simulating the instruction set of the actual target processor, along with the programming interface of the peripheral devices in the system and their behavior. The technical term for this is full-system simulation, since you are indeed simulating the full system and not just the processor. It is also the main technology that we will focus on in this paper.

The crucial and defining property of this type of simulation is that all software is the same as on the physical system, using the same device drivers and BSP code. This means that the type of processor, memory sizes, processor speeds, device memory map, and other aspects have to precisely match the target. It is common practice to develop the device drivers and BSP ports using virtual target systems in cases where the physical hardware is not yet available. We also avoid the need to maintain a separate build setup and build chain for simulation, since it is the same as that used for the physical target. Note that with virtual target systems, target properties like memory-management units, supervisor and hypervisor modes, endianness, word size, and floating-point precision are simulated and visible. To allow software to run completely unmodified, the model has to reproduce the properties and behavior of the system in a way that corresponds to how the hardware works, at the level where the software talks to the hardware.
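To make "the level where the software talks to the hardware" concrete, the sketch below shows the kind of target driver code a full-system simulator must satisfy unchanged: a polled transmit routine for an NS16550-style serial port accessed through volatile pointers. The base address is an assumption for illustration; the register offsets and status bit follow the well-known NS16550 layout.

```cpp
#include <cstdint>

// Polled transmit routine for an NS16550-style UART, written the way a
// target device driver would be. A full-system simulator must implement the
// device registers so that exactly this code works unmodified.
// UART_BASE is a hypothetical address; offsets follow the NS16550 layout.
static constexpr uintptr_t UART_BASE = 0xFE001000;   // assumed memory map
static constexpr uintptr_t THR       = 0x0;          // transmit holding register
static constexpr uintptr_t LSR       = 0x5;          // line status register
static constexpr uint8_t   LSR_THRE  = 0x20;         // transmitter-empty bit

static inline volatile uint8_t* reg(uintptr_t offset) {
    return reinterpret_cast<volatile uint8_t*>(UART_BASE + offset);
}

void uart_putc(char c) {
    // Busy-wait until the transmit holding register is empty ...
    while ((*reg(LSR) & LSR_THRE) == 0) {
        // spin
    }
    // ... then write the character; in simulation, this write is what
    // triggers the device model's behavior.
    *reg(THR) = static_cast<uint8_t>(c);
}
```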
At the hardware/software boundary, the devices in the system show up as registers at certain locations in the processor address space. Device drivers in the software read and write these addresses to make the devices perform. A register map (also called the device front end) can be very simple, like the few bytes of registers used to control an NS16550 serial port. It can be spread out across the memory map of the processor, like a PCI device that is mapped into both configuration space and memory space. The register map can also be very large, like that of a Freescale MPC8260 CPM, which has thousands of registers.

Typically, there will also be a back-end part of the device model, where interactions with the environment are handled. For example, a model of the buttons on a watch has to provide some means to "press" the buttons. A screen device needs to display what is being drawn on the host machine screen, and a network device needs to send and receive packets. A multiprocessor system controller will need to interrupt other processors in the system.

Devices also often need to talk to other devices, for example to raise interrupts in an interrupt controller or fetch data from memory systems. The front-end memory-map access could be considered a variant of this, but it is usually handled as a special case to improve performance. Similarly, the back-end interface is also access to "other hardware", but since this hardware touches the external world, it is special-cased to provide control and isolation for the simulation.

Behind the register map, the behavior part of the device model defines what the device does when it is activated. At a minimum, this needs to have enough functionality that the software will work. More functionality can be added to allow fault injection, system behavior logging, and other bonuses. This part of the model interfaces to the simulator kernel, in order to do things like posting events in time, checking the current time, initializing connections to other devices, and other housekeeping tasks. It also orchestrates the model behavior at its other interfaces. As an example, consider the four interfaces of a serial device model: the register-map front end toward the processor, the back end toward the external environment, connections to other device models such as an interrupt controller, and the interface to the simulator kernel.
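As a rough illustration – not any particular framework's API – a serial device model with these four interfaces might be sketched in C++ as follows. The SimulatorKernel and InterruptController types are invented placeholders for whatever the simulator kernel and neighboring device models actually provide.

```cpp
#include <cstdint>
#include <cstdio>
#include <functional>

// Invented placeholder interfaces standing in for the simulator kernel and
// an interrupt controller; real virtual-platform frameworks define their own.
struct SimulatorKernel {
    virtual void post_event(uint64_t delay, std::function<void()> callback) = 0;
    virtual uint64_t current_time() const = 0;
    virtual ~SimulatorKernel() = default;
};
struct InterruptController {
    virtual void raise(int irq) = 0;
    virtual ~InterruptController() = default;
};

// Sketch of a serial device model and its four interfaces:
//  1. front end        - register map accessed by the simulated processor
//  2. back end         - interaction with the outside world (host console here)
//  3. other devices    - raising an interrupt in an interrupt controller model
//  4. simulator kernel - posting events in simulated time, housekeeping
class SerialDeviceModel {
public:
    SerialDeviceModel(SimulatorKernel& kernel, InterruptController& pic, int irq)
        : kernel_(kernel), pic_(pic), irq_(irq) {}

    // (1) Front end: called when the processor reads or writes the registers.
    uint8_t read_register(uintptr_t offset) {
        return (offset == LSR) ? LSR_THRE : 0;     // always ready to transmit
    }
    void write_register(uintptr_t offset, uint8_t value) {
        if (offset == THR) {
            std::putchar(value);                   // (2) back end: host console
            // (4) model the character transmit time, then
            // (3) tell the interrupt controller that transmission is done.
            kernel_.post_event(TX_DELAY, [this] { pic_.raise(irq_); });
        }
    }

private:
    static constexpr uintptr_t THR = 0x0;          // transmit holding register
    static constexpr uintptr_t LSR = 0x5;          // line status register
    static constexpr uint8_t   LSR_THRE = 0x20;    // transmitter-empty bit
    static constexpr uint64_t  TX_DELAY = 1000;    // assumed per-character delay
    SimulatorKernel& kernel_;
    InterruptController& pic_;
    int irq_;
};
```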
